    Expressing social attitudes in virtual agents for social training games

    The use of virtual agents in social coaching has increased rapidly in the last decade. To train the user in the range of situations that can occur in real life, the virtual agent should be able to express different social attitudes. In this paper, we propose a model of social attitudes that enables a virtual agent to reason about the appropriate social attitude to express during the interaction with a user, given the course of the interaction as well as the emotions, mood and personality of the agent. Moreover, the model enables the virtual agent to display its social attitude through its non-verbal behaviour. The proposed model has been developed in the context of job interview simulation. The methodology used to develop the model combined a theoretical and an empirical approach: the model is based both on the Human and Social Sciences literature on social attitudes and on the analysis of an audiovisual corpus of job interviews, complemented by post-hoc interviews in which the recruiters commented on the attitudes they expressed during the job interview.
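
    As a rough illustration of what such reasoning could look like, here is a minimal Python sketch that maps an agent's affective state and the course of the interaction to a coarse attitude label. All names, thresholds, and the reduction of the interaction to a single performance score are illustrative assumptions, not the paper's model.

```python
# Minimal sketch (not the authors' implementation): selecting a social
# attitude from the agent's affective state and the interaction so far.
# All names and thresholds here are illustrative assumptions.
from dataclasses import dataclass

@dataclass
class AffectiveState:
    valence: float    # -1 (negative) .. 1 (positive), from emotions and mood
    dominance: float  # -1 (submissive) .. 1 (dominant), from personality

def select_attitude(state: AffectiveState, user_performance: float) -> str:
    """Map the agent's state and the course of the interaction (reduced
    here to a single user-performance score in [0, 1]) to an attitude label."""
    if state.dominance > 0.3 and user_performance < 0.4:
        return "hostile"       # dominant agent facing a struggling user
    if state.valence > 0.3:
        return "friendly"
    if state.dominance > 0.3:
        return "dominant"
    return "neutral"

print(select_attitude(AffectiveState(valence=-0.2, dominance=0.5), 0.3))
```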

    Analysis of Laughter in Cohesive Groups

    Group cohesion describes the group members’ shared commitment to group tasks and the interpersonal attraction among them. This paper presents a preliminary analysis of the occurrence of laughter with respect to group cohesion, using a corpus of multi-party interactions. Results indicate that laughter occurs more often in cohesive segments and that a strong positive correlation exists between the perceived level of cohesion and the amount of laughter.
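
    The reported analysis can be illustrated with a short sketch that correlates per-segment laughter counts with perceived cohesion ratings. The numbers below are invented for illustration; the paper uses annotated segments from a multi-party interaction corpus.

```python
# Illustrative sketch of the reported analysis: correlating per-segment
# laughter counts with perceived cohesion ratings. The data are made up.
import numpy as np

cohesion = np.array([4.2, 2.1, 3.8, 1.5, 4.7, 2.9])  # perceived cohesion per segment
laughter = np.array([9, 2, 7, 1, 11, 4])             # laughter events per segment

r = np.corrcoef(cohesion, laughter)[0, 1]  # Pearson correlation coefficient
print(f"Pearson r = {r:.2f}")              # a strongly positive r supports the finding
```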

    Rules for Responsive Robots: Using Human Interactions to Build Virtual Interactions

    Computers seem to be everywhere and to be able to do almost anything. Automobiles have Global Positioning Systems to give advice about travel routes and destinations. Virtual classrooms supplement and sometimes replace face-to-face classroom experiences with web-based systems (such as Blackboard) that allow postings, virtual discussion sections with virtual whiteboards, as well as continuous access to course documents, outlines, and the like. Various forms of “bots” search for information about intestinal diseases, plan airline reservations to Tucson, and inform us of the release of new movies that might fit our cinematic preferences. Instead of talking to the agent at AAA, the professor, the librarian, the travel agent, or the cinephile two doors down, we are interacting with electronic social agents. Some entrepreneurs are even trying to create toys that are sufficiently responsive to engender emotional attachments between the toy and its owner.

    Towards responsive Sensitive Artificial Listeners

    This paper describes work in the recently started SEMAINE project, which aims to build a set of Sensitive Artificial Listeners: conversational agents designed to sustain an interaction with a human user despite limited verbal skills, through robust real-time recognition and generation of non-verbal behaviour, both while the agent is speaking and while it is listening. We report on data collection and on the design of a system architecture geared towards real-time responsiveness.
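
    A hypothetical sketch of the kind of real-time listener loop such a system needs: continuously perceive the user's non-verbal signals and emit backchannels without waiting for complete utterances. The perceive() and render() stubs, the signal names, and the 10 Hz decision cycle are all assumptions for illustration, not the SEMAINE architecture.

```python
# Hypothetical real-time listener loop: perceive non-verbal signals
# continuously and react with backchannels. Everything here is a stub.
import time
import random

def perceive() -> dict:
    # Stand-in for audiovisual feature extraction (prosody, gaze, smiles).
    return {"user_pause": random.random() > 0.7,
            "user_smile": random.random() > 0.8}

def render(behavior: str) -> None:
    print(f"agent -> {behavior}")

for _ in range(10):            # a real system runs until the session ends
    signals = perceive()
    if signals["user_pause"]:  # a pause is a classic backchannel opportunity
        render("nod")
    elif signals["user_smile"]:
        render("smile back")   # mirror positive affect
    time.sleep(0.1)            # ~10 Hz decision cycle (assumed)
```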

    TranSTYLer: Multimodal Behavioral Style Transfer for Facial and Body Gestures Generation

    This paper addresses the challenge of transferring the behavior expressivity style of one virtual agent to another while preserving the shape of behaviors, as it carries communicative meaning. Behavior expressivity style is viewed here as the set of qualitative properties of behaviors. We propose TranSTYLer, a multimodal transformer-based model that synthesizes the multimodal behaviors of a source speaker with the style of a target speaker. We assume that behavior expressivity style is encoded across various modalities of communication, including text, speech, body gestures, and facial expressions. The model employs a style and content disentanglement schema to ensure that the transferred style does not interfere with the meaning conveyed by the source behaviors. Our approach eliminates the need for style labels and allows generalization to styles that have not been seen during the training phase. We train our model on the PATS corpus, which we extended to include dialog acts and 2D facial landmarks. Objective and subjective evaluations show that our model outperforms state-of-the-art models in style transfer for both styles seen and unseen during training. To tackle the style and content leakage that may arise, we propose a methodology to assess the degree to which behaviors and gestures associated with the target style are successfully transferred, while ensuring the preservation of those related to the source content.
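
    A minimal sketch of a style/content disentanglement schema of the kind the abstract describes, not the TranSTYLer architecture itself: a content encoder runs over the source sequence, a pooled style code is extracted from the target sequence, and a decoder recombines them. Feature sizes, layer choices, and the use of plain GRU/MLP encoders are assumptions.

```python
# Sketch of style/content disentanglement for behavior style transfer.
# Not the TranSTYLer model; sizes and layers are illustrative assumptions.
import torch
import torch.nn as nn

FEAT = 64   # per-frame multimodal feature size (assumed)
STYLE = 16  # style embedding size (assumed)

content_enc = nn.GRU(FEAT, 32, batch_first=True)          # keeps what is done
style_enc = nn.Sequential(nn.Linear(FEAT, STYLE), nn.Tanh())
decoder = nn.Linear(32 + STYLE, FEAT)                     # re-synthesizes behavior

source = torch.randn(1, 100, FEAT)   # source speaker's behavior sequence
target = torch.randn(1, 100, FEAT)   # target speaker's behavior sequence

content, _ = content_enc(source)                    # (1, 100, 32) content codes
style = style_enc(target).mean(dim=1)               # (1, STYLE) pooled style code
style_seq = style.unsqueeze(1).expand(-1, 100, -1)  # broadcast style over time
out = decoder(torch.cat([content, style_seq], dim=-1))  # source content, target style
print(out.shape)  # torch.Size([1, 100, 64])
```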

    A Touching Agent : Integrating Touch in Social Interactions between Human and Embodied Conversational Agent in an Immersive Environment

    Embodied conversational agents (ECAs) are endowed with more and more social and emotional capabilities. They can build rapport with humans by expressing thoughts and emotions through verbal and non-verbal behaviors. However, ECAs often lack the sense of touch, which is an important communicative and emotional capability, in particular in the context of an immersive environment. The sense of touch has been shown to be essential to the social development and general well-being of human beings, from infancy to adulthood. It is thought to facilitate the establishment of relationships and is widely considered a very important medium of communication, especially for the communication of emotions. In many languages, being touched designates not only physical contact but also being moved by something, feeling something on an emotional level. While in adulthood touch is usually reserved for the closest relationships, it is nonetheless present in our daily lives (in greetings, for example). We therefore believe that adding touch to the social capabilities of artificial agents would contribute to their ability to build rapport and emotional connections with humans. But to what extent would granting an ECA the ability to touch and be touched enhance its ability to communicate emotions and to build and maintain a social and emotional relationship with a human? To investigate this question, our work focuses on the development of a touching agent: touching in the physical sense as well as the emotional sense. In the context of an immersive environment, our autonomous virtual ECA should be able to perceive and interpret touch, as well as decide how and when to perform it based on the overall interaction and the emotional state of its human partner. Our first results are so far promising as to the credibility of such a touching agent.

    Issues in Facial Animation

    Our goal is to build a system for 3-D animation of facial expressions of emotion correlated with the intonation of the voice. Existing systems have so far not taken into account the link between these two features. Many linguists and psychologists have noted the importance of spoken intonation for conveying the different emotions associated with speakers' messages. Moreover, some psychologists have found universal facial expressions linked to emotions and attitudes. We will look at the rules that govern these relations (intonation/emotions and facial expressions/emotions) as well as the coordination of these various modes of expression. Given an utterance, we consider how the message (what is new or old information in the given context), transmitted through the choice of accents and their placement, is conveyed through the face. The facial model integrates the action of each muscle or group of muscles as well as the propagation of the muscles' movement. It is also adapted to the FACS notation (Facial Action Coding System) created by P. Ekman and W. Friesen to describe facial expressions. Our first step will be to enumerate and differentiate facial movements linked to emotions from those linked to conversation. Then, we will examine the rules that drive them and how their different actions interact.
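
    For readers unfamiliar with FACS, the sketch below shows the idea behind the notation the facial model adapts: expressions are encoded as sets of numbered Action Units (AUs), each corresponding to a muscle action. The AU names follow Ekman and Friesen's FACS; the expression-to-AU table is a common simplification, not the paper's rule set.

```python
# FACS-style encoding: expressions as sets of numbered Action Units (AUs),
# each backed by a muscle action. AU names follow Ekman & Friesen's FACS;
# the expression-to-AU table is a common simplification for illustration.
FACS_AUS = {
    1: "inner brow raiser",
    2: "outer brow raiser",
    4: "brow lowerer",
    6: "cheek raiser",
    12: "lip corner puller",
    15: "lip corner depressor",
}

EXPRESSIONS = {
    "happiness": {6, 12},
    "sadness": {1, 4, 15},
}

def describe(expression: str) -> list[str]:
    """List the muscle actions activated for a given expression."""
    return [FACS_AUS[au] for au in sorted(EXPRESSIONS[expression])]

print(describe("happiness"))  # ['cheek raiser', 'lip corner puller']
```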